Student Enrollment


Predicting First-Year Dropout from Pre-Enrolment Motivation Statements Using Text Mining

Soppe, K. F. B., Bagheri, A., Nadi, S., Klugkist, I. G., Wubbels, T., Wijngaards-De Meij, L. D. N. V.

arXiv.org Artificial Intelligence

Preventing student dropout is a major challenge in higher education, and it is difficult to predict prior to enrolment which students are likely to drop out and which are likely to succeed. High school GPA is a strong predictor of dropout, but much variance in dropout remains unexplained. This study focused on predicting university dropout by using text mining techniques, with the aim of exhuming information contained in motivation statements written by students. By combining text data with classic predictors of dropout in the form of student characteristics, we attempt to enhance the available set of predictive student characteristics. Our dataset consisted of 7,060 motivation statements of students enrolling in a non-selective bachelor's programme at a Dutch university in 2014 and 2015. Support Vector Machines were trained on 75 percent of the data, and several models were evaluated on the remaining test data. We used various combinations of student characteristics and text features, such as TF-IDF, topic modelling, and the LIWC dictionary. Results showed that, although combining text and student characteristics did not improve the prediction of dropout, text analysis alone predicted dropout about as well as a set of student characteristics. Suggestions for future research are provided.
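The abstract mentions TF-IDF among the text features fed to the classifier. As a minimal illustration of that weighting scheme (not the paper's actual pipeline; the toy statements, tokenization, and function name below are invented), TF-IDF down-weights terms that appear in every document and up-weights distinctive ones:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute TF-IDF weights for a list of tokenized documents.

    tf   = term count / document length
    idf  = log(number of docs / number of docs containing the term)
    """
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))          # document frequency: count each term once per doc
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf})
    return vecs

# Two invented toy "motivation statements"
docs = [
    "i am motivated to study hard".split(),
    "i am unsure about this programme".split(),
]
vecs = tfidf_vectors(docs)
```

Words shared by all statements (here "i", "am") receive zero weight, while statement-specific words such as "motivated" keep positive weight; in the study, vectors like these would then be passed to a Support Vector Machine.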


Reducing the Filtering Effect in Public School Admissions: A Bias-aware Analysis for Targeted Interventions

Faenza, Yuri, Gupta, Swati, Vuorinen, Aapeli, Zhang, Xuan

arXiv.org Artificial Intelligence

Problem definition: Traditionally, New York City's top 8 public schools have selected candidates solely based on their scores in the Specialized High School Admissions Test (SHSAT). These scores are known to be impacted by the socioeconomic status of students and the test preparation received in middle schools, leading to a massive filtering effect in the education pipeline. The classical mechanisms for assigning students to schools do not naturally address problems like school segregation and class diversity, which have worsened over the years. The scientific community, including policymakers, has reacted by incorporating group-specific quotas and proportionality constraints, with mixed results. The problem of finding effective and fair methods for broadening access to top-notch education is still unsolved. Methodology/results: We take an operations approach to the problem different from most established literature, with the goal of increasing opportunities for students with high economic needs. Using data from the Department of Education (DOE) in New York City, we show that there is a shift in the distribution of scores obtained by students that the DOE classifies as "disadvantaged" (following criteria mostly based on economic factors). We model this shift as a "bias" that results from an underestimation of the true potential of disadvantaged students. We analyze the impact this bias has on an assortative matching market. We show that centrally planned interventions, through scholarships or training, can significantly reduce the impact of bias when they target the segment of disadvantaged students with average performance.


Integrating AI Tutors in a Programming Course

Ma, Iris, Krone-Martins, Alberto, Lopes, Cristina Videira

arXiv.org Artificial Intelligence

RAGMan is an LLM-powered tutoring system that can support a variety of course-specific and homework-specific AI tutors. RAGMan leverages Retrieval Augmented Generation (RAG), as well as strict instructions, to ensure the alignment of the AI tutors' responses. By using RAGMan's AI tutors, students receive assistance with their specific homework assignments without directly obtaining solutions, while also having the ability to ask general programming-related questions. RAGMan was deployed as an optional resource in an introductory programming course with an enrollment of 455 students. It was configured as a set of five homework-specific AI tutors. This paper describes the interactions the students had with the AI tutors, the students' feedback, and a comparative grade analysis. Overall, about half of the students engaged with the AI tutors, and the vast majority of the interactions were legitimate homework questions. When students posed questions within the intended scope, the AI tutors delivered accurate responses 98% of the time. Among the students who used the AI tutors, 78% reported that the tutors helped their learning. Beyond the AI tutors' ability to provide valuable suggestions, students reported appreciating them for fostering a safe learning environment free from judgment.


Large Language Model as an Assignment Evaluator: Insights, Feedback, and Challenges in a 1000+ Student Course

Chiang, Cheng-Han, Chen, Wei-Chih, Kuan, Chun-Yi, Yang, Chienchou, Lee, Hung-yi

arXiv.org Artificial Intelligence

Using large language models (LLMs) for automatic evaluation has become an important evaluation method in NLP research. However, it is unclear whether these LLM-based evaluators can be applied in real-world classrooms to assess student assignments. This empirical report shares how we use GPT-4 as an automatic assignment evaluator in a university course with 1,028 students. Based on student responses, we find that LLM-based assignment evaluators are generally acceptable to students when students have free access to these LLM-based evaluators. However, students also noted that the LLM sometimes fails to adhere to the evaluation instructions. Additionally, we observe that students can easily manipulate the LLM-based evaluator to output specific strings, allowing them to achieve high scores without meeting the assignment rubric. Based on student feedback and our experience, we provide several recommendations for integrating LLM-based evaluators into future classrooms.


More than 1,000 students pledge not to work at Google and Amazon due to Project Nimbus

Engadget

No Tech for Apartheid (NOTA), a coalition of tech workers demanding that big tech companies drop their contracts with the Israeli government, is close to reaching its goal for a campaign asking students not to work with Google and Amazon. As Wired reports, more than 1,100 people who identified themselves as STEM students and young workers have taken the pledge to refuse jobs from the companies "for powering Israel's Apartheid system and genocide against Palestinians." According to its website, NOTA's goal is to gather 1,200 signatures for the campaign. "As young people and students in STEM and beyond, we refuse to have any part in these horrific abuses. We're joining the #NoTechForApartheid campaign to demand Amazon and Google immediately end Project Nimbus," part of the pledge reads.


Student uses AI to decipher word in ancient scroll from Herculaneum

New Scientist

Almost 2000 years after they were buried by the volcanic eruption of Mount Vesuvius in AD 79, scrolls from a library in the ancient Roman town of Herculaneum have begun to reveal their secrets: the Greek word for "purple" has been extracted from one of them. The tightly wrapped papyrus scrolls were charred in the disaster, which also destroyed the nearby town of Pompeii. But by studying 3D X-ray scans of the scrolls, researchers have deciphered a word on one of them: "porphyras", meaning "purple". The breakthrough came from Luke Farritor, a 21-year-old computer science student at the University of Nebraska-Lincoln. His success involved training an AI to identify nearly invisible ink-like patterns in the 3D scans. "Seeing Luke's first word was a shock," says Michael McOsker at University College London in the UK, who was not involved in the discovery.


Explainable Disparity Compensation for Efficient Fair Ranking

Gale, Abraham, Marian, Amélie

arXiv.org Artificial Intelligence

Ranking functions that are used in decision systems often produce disparate results for different populations because of bias in the underlying data. Addressing, and compensating for, these disparate outcomes is a critical problem for fair decision-making. Recent compensatory measures have mostly focused on opaque transformations of the ranking functions to satisfy fairness guarantees, or on the use of quotas or set-asides to guarantee a minimum number of positive outcomes to members of underrepresented groups. In this paper we propose easily explainable, data-driven compensatory measures for ranking functions. Our measures rely on the generation of bonus points given to members of underrepresented groups to address disparity in the ranking function. The bonus points can be set in advance, and can be combined, allowing for considering the intersections of representations and giving better transparency to stakeholders. We propose efficient sampling-based algorithms to calculate the number of bonus points needed to minimize disparity. We validate our algorithms using real-world school admissions and recidivism datasets, and compare our results with those of existing fair ranking algorithms.
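The core idea of bonus-point compensation can be sketched in a few lines. The paper's actual algorithms are sampling-based and data-driven; the following is only a toy brute-force illustration with invented candidates, scores, and function names, showing how a flat bonus shifts a ranking and how the smallest disparity-reducing bonus might be searched for:

```python
def rank_with_bonus(candidates, bonus):
    """candidates: list of (name, score, is_disadvantaged).
    Add a flat bonus to disadvantaged candidates' scores, then rank descending."""
    adjusted = [(name, score + (bonus if flag else 0), flag)
                for name, score, flag in candidates]
    return sorted(adjusted, key=lambda c: c[1], reverse=True)

def smallest_bonus_for_parity(pool, k, target):
    """Brute-force the smallest integer bonus that places at least
    `target` disadvantaged candidates in the top k (illustrative only)."""
    for b in range(0, 101):
        top = rank_with_bonus(pool, b)[:k]
        if sum(1 for c in top if c[2]) >= target:
            return b
    return None

# Invented toy pool: (name, raw score, disadvantaged flag)
pool = [("a", 90, False), ("b", 88, False), ("c", 86, True), ("d", 84, True)]
top2_no_bonus = [c[0] for c in rank_with_bonus(pool, 0)[:2]]
top2_bonus    = [c[0] for c in rank_with_bonus(pool, 5)[:2]]
```

With no bonus the top two slots go entirely to the advantaged group; a bonus of 5 lifts candidate "c" into the top two. Because the bonus is an explicit, fixed number of points, the adjustment is transparent to stakeholders in a way that an opaque re-ranking is not.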


Student slapped with a £60 parking fine uses ChatGPT to write appeal - and gets penalty REVOKED

Daily Mail - Science & tech

Elon Musk wants to push technology to its absolute limit, from space travel to self-driving cars -- but he draws the line at artificial intelligence. The billionaire first shared his distaste for AI in 2014, calling it humanity's 'biggest existential threat' and comparing it to 'summoning the demon.' At the time, Musk also revealed he was investing in AI companies not to make money but to keep an eye on the technology in case it gets out of hand. His main fear is that in the wrong hands, if AI becomes advanced, it could overtake humans and spell the end of mankind, which is known as singularity. That concern is shared among many brilliant minds, including the late Stephen Hawking, who told the BBC in 2014: 'The development of full artificial intelligence could spell the end of the human race.'


Student caught using ChatGPT to write philosophy essay at South Carolina university

Daily Mail - Science & tech

A South Carolina college philosophy professor is warning that we should expect a flood of cheating with ChatGPT - a chatbot from OpenAI that's powered by artificial intelligence - after catching one of his students using it to generate an essay. Darren Hick, a philosophy professor at Furman University in Greenville, South Carolina, wrote a lengthy Facebook post this month detailing issues with the advanced chatbot and the 'first plagiarist' he'd caught for a recent assignment to write 500 words on Hume and the paradox of horror. ChatGPT, which has been trained on a gigantic sample of text from the internet, can understand human language, conduct conversations with humans and generate detailed text that many have said is human-like and quite impressive. 'ChatGPT responds in seconds with a response that looks like it was written by a human -- moreover, a human with a good sense of grammar and an understanding of how essays should be structured,' Hick wrote. 'The first indicator that I was dealing with A.I. is that, despite the syntactic coherence of the essay, it made no sense.'


The power of human connection in decision-making in a data driven international student recruitment

#artificialintelligence

We need career planners, not just people who secure admissions! The pandemic pushed the limits of online aggregators and EdTech companies in international student recruitment. We are witnessing technology slowly making a powerful impact on student recruitment, though the industry has yet to witness the full power of Artificial Intelligence and Automation. Building AI platforms is going to become cheaper, and replicating a technology model doesn't require a big innovation. The industry is going to be flooded with too much data for recruiters and students.